Last week we learned that Snap has acquired NextMind, a brain-interface device manufacturer (see also the coverage on RoadToVR). This is another step in a long chain of cross-acquisitions between AR and brain HCI, and I want to explain why I am excited about this new path.
“Why would a messaging app be interested in brain control?”
Snap’s main product is a messaging app called Snapchat. Why would a messaging app need research on brain waves? A short note on Snap’s history explains its interest.
Snap is not new to augmented reality. Snapchat’s support for AR objects on top of images was a great success and led other messaging apps to follow the same path. Since then, Snap has opened a full-scale AR operation, built a creator studio that lets users generate AR content very simply, and even become the producer of stylish Spectacles (now in their second generation).
The NextMind acquisition falls under the Spectacles department, and let me explain why.
“Augmented reality devices need to be lightweight. Even that is not enough; we need them lighter.”
Back in 2007, when I first saw augmented reality technology, I immediately felt this was no ordinary tech that would fade away. It would change our lives and affect all aspects of them.
I presented these use cases at several conferences and even demonstrated one of them in my thesis work, released in 2010: a project that presented step-by-step instructions for activating a machine.
But even in those early experiments, although the hardware was the lightest available at the time, light even compared to today’s devices, users said it was too heavy.
“Augmented reality interactions require new interfaces”
By their very concept, augmented reality devices offer no button clicks as a means for users to select and browse options. Our thesis researched those directions as well: we considered gesture recognition, voice recognition, and, of course, brain control. But in 2010, brain interfaces were still in their early phases, enabling only a limited number of responses from pre-defined options, with too high an error rate in detection. They were therefore not included in the thesis, although we were quite sure the technology would rise someday. In this article, I would like to provide several reasons why I am so confident about that.
Why Combine Brain Interface with Augmented Reality?
1. Brain interface is silent
We interact with AR devices in everyday situations: while waiting for the bus, during dinner, while driving. A silent interface that requires neither voice nor body gestures is a significant social advantage.
The innovative and inspiring video Sight shows this type of social situation during a romantic date.
2. Brain interface requires a sensor on the back of the head
The brain-sensing device carries its core weight on the back of the head. Several manufacturers have even built helmets that wrap around the whole head.
AR devices, in contrast, carry their biggest weight on the front of the face, and their manufacturers add extra weight around the head just to balance the front side.
Combining the two might produce a balanced device with less excess, unnecessary weight. Two birds with one stone…
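As a rough back-of-the-envelope sketch (the masses and distances below are illustrative assumptions, not measurements of any real product): treating the head’s pivot as a fulcrum, a front-mounted optics module is balanced by a rear-mounted sensor when their moments match:

```latex
% Illustrative moment balance with assumed numbers:
% a 60 g optics module 8 cm in front of the pivot is offset by
% a 40 g brain sensor 12 cm behind it.
m_f d_f \approx m_r d_r
\quad\Longrightarrow\quad
60\,\mathrm{g} \times 8\,\mathrm{cm} \approx 40\,\mathrm{g} \times 12\,\mathrm{cm} = 480\,\mathrm{g\,cm}
```

The point is that the rear sensor would be doing useful work in exactly the spot where today’s AR designs put dead counterweight.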
3. Brain interface makes other techs redundant
Nowadays, the most common ways to interact with an AR device are hand-gesture detection combined with face direction, and eye-gaze detection. Voice commands exist as well, but they are less commonly used, since the environment often does not allow them, in a loud factory, for example.
Although I do not expect manufacturers to make the brain controller the main input method at the moment, I do expect that to happen once detection is perfected.
And why?
Because computer vision requires a lot of computing power. In fact, every additional interface does. Having ALL the interfaces on the device means a stronger on-board computer, a bigger battery, and so on. At the end of the day, all of this translates into heavier devices. That’s why we now call them headsets rather than simple glasses…
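For a crude sense of the scale (the power draws here are my own assumptions; roughly 250 Wh/kg is a typical figure for today’s lithium-ion cells):

```latex
% If hand tracking and gaze tracking together draw ~2 W continuously,
% a 2-hour session costs E = P * t = 4 Wh of energy, which at
% ~250 Wh/kg of battery is on the order of 16 g of cells,
% before packaging, cooling, and structure are even counted.
E = P \times t = 2\,\mathrm{W} \times 2\,\mathrm{h} = 4\,\mathrm{Wh},
\qquad
m \approx \frac{4\,\mathrm{Wh}}{250\,\mathrm{Wh/kg}} = 16\,\mathrm{g}
```

Every always-on interface adds its own slice of this budget, and the grams add up fast on a device worn on the face.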
Our users want the lightest product possible. Therefore we need to design something that is simple to use, built on one technology only. A high-accuracy brain HCI would do the job. BUT we are not there yet; more research is needed.
So, Why Am I So Happy?
Because only tech giants are able to fund and sustain the amount of research needed to reach high accuracy. These purchases are a sign that the direction is being seriously investigated.
Potential ethical issues presented by tech
Having said that, tech giants also pose a danger. Brain sensing and augmented reality are both technologies that can significantly affect human perception, and therefore they can also twist our reality.
That situation has ethical implications. If legislation does not arrive fast enough, developers will release products that carry real risks, such as mind control. By mind control I mean controlling the information that arrives at your doorstep, with far more precision than advertisements alone.